
    How Robust is Robust Control in the Time Domain?

    By applying robust control the decision maker wants to make good decisions when his model is only a good approximation of the true one. Such decisions are said to be robust to model misspecification. In this paper it is shown that both a “probabilistically sophisticated” and a non-“probabilistically sophisticated” decision maker applying robust control in the time domain are indeed assuming a very special kind of “misspecification of the approximating model.” This is true when unstructured uncertainty à la Hansen and Sargent is used or when uncertainty is related to unknown structural parameters of the model.
    Keywords: linear quadratic tracking problem, optimal control, robust control, time-varying parameters
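
    For reference, the robust control problem with unstructured uncertainty à la Hansen and Sargent is usually written as a min-max (multiplier) problem; the sketch below uses standard textbook notation rather than this paper's own, so the symbols are assumptions on our part:

    \[
    \min_{\{u_t\}} \max_{\{w_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \left( x_t' Q x_t + u_t' R u_t - \theta\, w_t' w_t \right)
    \quad \text{s.t.} \quad
    x_{t+1} = A x_t + B u_t + C\,(\varepsilon_{t+1} + w_t),
    \]

    where the distortion $w_t$ is chosen by a malevolent "nature" and the multiplier $\theta > 0$ prices deviations from the approximating model; letting $\theta \to \infty$ recovers the standard linear quadratic problem without misspecification.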

    Evaluating Research Activity: Impact Factor vs. Research Factor

    The Impact Factor (IF) “has moved ... from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers, and even the institution they work in” ([2], p. 1). However, the use of this index for evaluating individual scientists is dubious. The present work compares the ranking of research units generated by the Research Factor (RF) index with that associated with the popular IF. The former, originally introduced in [38], reflects article and book publications and a host of other activities categorized as coordination activities (e.g., conference organization, research group coordination), dissemination activities (e.g., conference and seminar presentations, participation in research groups), editorial activities (e.g., journal editor, associate editor, referee) and functional activities (e.g., Head of Department). The main conclusion is that replacing the IF with the RF in hiring, tenure decisions and the awarding of grants would greatly increase the number of topics investigated and the number and quality of long-run projects.
    Keywords: scientific research assessment, Impact Factor, bibliometric indices, feasible Research Factor
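
    To make the structure of such an index concrete, here is a minimal sketch of a weighted-sum score over the activity categories listed above; the weights, field names and sample records are purely hypothetical illustrations, not the actual RF formula of [38]:

    # Hypothetical sketch of a Research-Factor-style index: a weighted sum
    # over activity categories. Weights are illustrative, not those of [38].
    CATEGORY_WEIGHTS = {
        "publications": 1.0,   # articles and books
        "coordination": 0.4,   # conference organization, group coordination
        "dissemination": 0.3,  # conference and seminar presentations
        "editorial": 0.5,      # editor, associate editor, referee
        "functional": 0.2,     # e.g., Head of Department
    }

    def research_factor(record):
        """Score one research unit; record maps category name to activity count."""
        return sum(w * record.get(cat, 0) for cat, w in CATEGORY_WEIGHTS.items())

    # Ranking two hypothetical research units by the sketch index.
    unit_a = {"publications": 10, "editorial": 6, "dissemination": 4}
    unit_b = {"publications": 14, "coordination": 1}
    ranking = sorted({"A": research_factor(unit_a), "B": research_factor(unit_b)}.items(),
                     key=lambda kv: kv[1], reverse=True)
    print(ranking)   # [('B', 14.4), ('A', 14.2)]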

    The usual robust control framework in discrete time: some interesting results

    By applying robust control the decision maker wants to make good decisions when his model is only a good approximation of the true one. Such decisions are said to be robust to model misspecification. In this paper it is shown that the application of the usual robust control framework in discrete time problems is associated with some interesting, if not unexpected, results that have far-reaching consequences when robust control is applied sequentially, say every year in fiscal policy or every quarter (or month) in monetary policy. This is true when unstructured uncertainty à la Hansen and Sargent is used, both in the case of a “probabilistically sophisticated” and a non-“probabilistically sophisticated” decision maker, or when uncertainty is related to unknown structural parameters of the model.
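
    In discrete time the same problem is typically solved recursively; a standard robust Bellman equation (again in textbook notation, not the paper's own derivation) is

    \[
    V(x_t) = \min_{u_t} \max_{w_t} \left\{ x_t' Q x_t + u_t' R u_t - \theta\, w_t' w_t + \beta\, E_t\, V\!\left(A x_t + B u_t + C(\varepsilon_{t+1} + w_t)\right) \right\}.
    \]

    Applying robust control sequentially, say every quarter in monetary policy, amounts to re-solving this recursion each period with the then-current approximating model.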

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive the closed-loop form of the Expected Optimal Feedback rule, sometimes called passive learning stochastic control, with time-varying parameters. As such this paper extends the work of Kendrick (1981, 2002, Chapter 6), where parameters are assumed to vary randomly around a known constant mean. Furthermore, we show that the cautionary myopic rule in the Beck and Wieland (2002) model, a test bed for comparing various stochastic optimization approaches, can be cast into this framework and can be treated as a special case of this solution.
    Keywords: optimal experimentation, stochastic optimization, time-varying parameters, expected optimal feedback
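
    As a concrete illustration, below is a minimal numerical sketch of the cautionary myopic rule in a Beck and Wieland (2002)-style test bed, with a scalar Kalman filter tracking the unknown, time-varying control multiplier; all parameter values are assumptions chosen for the example, not those of the original papers:

    import numpy as np

    # Sketch: x_t = alpha * x_{t-1} + beta_t * u_t + eps_t, with beta_t an
    # unobserved random-walk parameter tracked by a scalar Kalman filter.
    rng = np.random.default_rng(0)
    alpha = 1.0                      # known state persistence
    beta_true = -0.5                 # true control multiplier (unknown to controller)
    sig_eps2, sig_eta2 = 1.0, 0.01   # additive-noise and parameter-drift variances
    b_hat, Sigma = -1.0, 0.5         # prior mean and variance of beta
    x = 1.0                          # initial state; the target is 0

    for t in range(20):
        # Cautionary myopic rule: minimize one-period E[x_t^2]; the parameter
        # variance Sigma shrinks the certainty-equivalent control.
        u = -b_hat * alpha * x / (b_hat**2 + Sigma)
        beta_true += rng.normal(scale=np.sqrt(sig_eta2))   # parameter drifts
        x_new = alpha * x + beta_true * u + rng.normal(scale=np.sqrt(sig_eps2))
        # Kalman measurement update of the beta estimate, then time update.
        K = Sigma * u / (u**2 * Sigma + sig_eps2)
        b_hat += K * (x_new - alpha * x - b_hat * u)
        Sigma = (1.0 - K * u) * Sigma + sig_eta2
        x = x_new

    print(f"beta estimate {b_hat:.2f} vs true {beta_true:.2f}; final state {x:.2f}")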

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive, by using dynamic programming, the closed-loop form of the Expected Optimal Feedback rule with a time-varying parameter. As such this paper extends the work of Kendrick (1981, 2002, Chapter 6) to the time-varying parameter case. Furthermore, we show that the Beck and Wieland (2002) model can be cast into this framework and can be treated as a special case of this solution.

    The Dual Approach in an Infinite Horizon Model with a Time-Varying Parameter

    In a previous paper Amman and Tucci (2017) discuss the DUAL control method, based on the seminal works of Tse and Bar-Shalom (1973) and Kendrick (1981), applied to the BMW infinite horizon model with an unknown but constant parameter. In these pages the DUAL solution to the BMW infinite horizon model with one time-varying parameter is reported. The special case where the desired paths for the state and control are set equal to 0 and the linear system has no constant is considered. The appropriate Riccati quantities for the augmented system are derived and the time-invariant feedback rules are defined following the same steps as in Amman and Tucci (2017). Finally the new approximate cost-to-go is presented. Two cases are considered. In the first one the optimal control is selected using the updated estimate of the time-varying parameter in the model. In the second one only an old estimate of that parameter is available at the time the decision maker chooses her/his control. For the reader’s sake, most of the technical derivations are confined to a number of short appendices.
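
    The augmented system referred to here is the usual device of stacking the uncertain parameter onto the state; in a scalar illustration (our notation, not necessarily the paper's), with a random-walk parameter $\theta_t$,

    \[
    z_t = \begin{pmatrix} x_t \\ \theta_t \end{pmatrix}, \qquad
    x_{t+1} = \alpha x_t + \theta_t u_t + \varepsilon_{t+1}, \qquad
    \theta_{t+1} = \theta_t + \eta_{t+1},
    \]

    so that the Riccati recursions and the time-invariant feedback rule are written for $z_t$, with parameter uncertainty entering through the covariance block of $\theta_t$.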

    How Active is Active Learning: Value Function Method Versus an Approximation Method

    In a previous paper Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function and the approximation method. By using the same model and dataset as in Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach and identify some elements of the model specifications which affect the difference between the two solutions. They conclude that differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
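
    The approximation method descends from Tse and Bar-Shalom's dual control, in which the cost-to-go of a candidate control is approximated by a well-known three-way decomposition (standard in this literature, reproduced here only as background):

    \[
    J_t \approx J_{D,t} + J_{C,t} + J_{P,t},
    \]

    where $J_{D,t}$ is the deterministic term, $J_{C,t}$ the cautionary term reflecting existing parameter uncertainty, and $J_{P,t}$ the probing term capturing the expected value of the learning induced by perturbing the control; how "active" the learning is depends on how much weight the probing term carries.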

    The “R-Factor”: A New Way to Evaluate Scientific Research

    As pointed out in Amin and Mabe (2000, p. 1), the journal impact factor (IF) “has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers, and even the institution they work in.” However, the use of this index for evaluating individual scientists is dubious and may “skew the course of scientific research” (Monastersky, 2005, p. 1). Moreover the IF is, at most, able to measure only the quality of a very restricted range of research activities: namely, publishing journal articles. In the present work a new indicator of researcher quality, named the Researcher Impact Factor (RF), is introduced. It is constructed as a function of the number and quality of publications (articles, books and working papers) and of the “other activities” usually associated with being a researcher (attending and/or organizing conferences, being the editor, associate editor or referee of a journal, teaching or supervising at graduate level, coordinating research groups and so on). To show the characteristics of the new index, a numerical example is carried out to rank two hypothetical scientists. The main conclusion is that replacing the IF with the RF in hiring, tenure decisions and the awarding of grants would greatly increase the number of topics investigated and the number and quality of long-run projects. The Excel spreadsheet used for the computations is available on demand from the authors.
    Keywords: Impact Factor, bibliometric indices, research evaluation, researcher impact factor

    How active is active learning: value function method vs an approximation method

    In a previous paper Amman and Tucci (2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function and the approximation method. By using the same model and dataset as in Beck and Wieland (2002), they find that the approximation method produces solutions close to those generated by the value function approach and identify some elements of the model specifications which affect the difference between the two solutions. They conclude that differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
